Search results for "Square matrix"
Showing 7 of 7 documents
Spectral density of the correlation matrix of factor models: a random matrix theory approach.
2005
We studied the eigenvalue spectral density of the correlation matrix of factor models of multivariate time series. Using random matrix theory, we analytically quantified the effect of statistical uncertainty on the spectral density due to the finiteness of the sample. We considered a broad range of models, ranging from one-factor models to hierarchical multifactor models.
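The setting described above can be reproduced in miniature: simulate a one-factor model, compute the sample correlation matrix, and inspect its eigenvalue spectrum. This is a hedged sketch with illustrative parameters (`N`, `T`, and the loading range are assumptions, not values from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-factor model: N series of length T, each loading on a
# single common factor plus idiosyncratic noise (parameters are illustrative).
N, T = 50, 500
factor = rng.standard_normal(T)
loadings = rng.uniform(0.3, 0.7, size=N)
noise = rng.standard_normal((N, T))
X = loadings[:, None] * factor + noise          # shape (N, T)

# Sample correlation matrix and its eigenvalue spectrum.
C = np.corrcoef(X)
eigvals = np.linalg.eigvalsh(C)

# The trace of a correlation matrix equals N, so the eigenvalues sum to N;
# with one strong common factor, the largest eigenvalue separates from the
# bulk, which is what a random-matrix (Marchenko-Pastur) comparison detects.
print(eigvals.sum())                  # ≈ N
print(eigvals[-1] > eigvals[-2])      # factor eigenvalue dominates
```

The finite-sample effect the abstract refers to is visible here: even with no factor, the bulk eigenvalues would spread around 1 simply because T is finite.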
On the Construction of Classes of Suffix Trees for Square Matrices: Algorithms and Applications
1996
We provide a uniform framework for the study of index data structures for a two-dimensional matrix TEXT[1:n, 1:n] whose entries are drawn from an ordered alphabet Σ. An index for TEXT can be informally seen as the two-dimensional analog of the suffix tree for a string. It allows on-line searches and statistics to be performed on TEXT by representing compactly the Θ(n³) square submatrices of TEXT in optimal O(n²) space. We identify 4^(n−1) families of indices for TEXT, each containing ∏_{i=1..n} (2i−1)! isomorphic data structures. We also develop techniques leading to a single algorithm that efficiently builds any index in any family in O(n² log n) time and O(n²) space. Such an algorithm improves in various …
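The Θ(n³) count of square submatrices follows from a direct tally: an n×n matrix has (n−k+1)² square submatrices of side k, one per top-left corner. A small sketch (the function name is illustrative):

```python
def count_square_submatrices(n: int) -> int:
    """Number of square submatrices of an n x n matrix: sum over side
    lengths k = 1..n of the (n - k + 1)**2 possible top-left corners."""
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

# The sum equals 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6, i.e. Theta(n^3).
print(count_square_submatrices(3))                        # 9 + 4 + 1 = 14
print(count_square_submatrices(10) == 10 * 11 * 21 // 6)  # closed form agrees
```

Representing these Θ(n³) submatrices in O(n²) space is exactly the compression the index achieves, analogous to a suffix tree storing Θ(n²) substrings of a string in O(n) space.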
Inverse eigenvalue problem for normal J-hamiltonian matrices
2015
A complex square matrix A is called J-hamiltonian if AJ is hermitian, where J is a normal real matrix such that J² = −I_n. In this paper we solve the problem of finding J-hamiltonian normal solutions for the inverse eigenvalue problem.
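Assuming the definition as reconstructed above (AJ hermitian, with J real, normal, and J² = −I), the property is easy to check numerically. The choice of J and the example matrix below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def is_j_hamiltonian(A: np.ndarray, J: np.ndarray, tol: float = 1e-10) -> bool:
    """Check the (reconstructed) definition: A is J-hamiltonian when A @ J
    is hermitian, for a real normal J satisfying J @ J == -I."""
    n = A.shape[0]
    assert np.allclose(J @ J, -np.eye(n)), "J must satisfy J^2 = -I"
    AJ = A @ J
    return np.allclose(AJ, AJ.conj().T, atol=tol)

# A standard symplectic-style J in dimension 2 (assumed for illustration).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# For this J, A = [[a, b], [c, -conj(a)]] with real b, c makes A @ J hermitian.
A = np.array([[1 + 2j, 3.0], [4.0, -1 + 2j]])
print(is_j_hamiltonian(A, J))             # True
print(is_j_hamiltonian(np.eye(2), J))     # False: I @ J = J is not hermitian
```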
What is the Best Method of Matrix Adjustment? A Formal Answer by a Return to the World of Vectors
2003
Matrix adjustment methods seek the matrix that is closest to an initial matrix while matching the column and row sum totals of a second matrix. To decide which matrix-adjustment method is better, the article returns to the simpler problem of vector adjustment and then back to matrices. Minimizing the information lost (biproportional methods and RAS) leads to a multiplicative form and generalizes the linear model. On the other hand, distance minimization, which leads to an additive form, tends to distort the data by giving a result asymptotically independent of the initial matrix. The result allows concluding non-ambiguou…
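The RAS (biproportional) procedure mentioned above alternates between rescaling rows and columns until both margins match the targets. A minimal sketch, with illustrative data (the function name and targets are assumptions):

```python
import numpy as np

def ras_adjust(M, row_targets, col_targets, iters=500):
    """Biproportional (RAS) adjustment sketch: alternately rescale rows and
    columns of a positive initial matrix M until its margins match the target
    row/column sums. Targets must share the same grand total."""
    X = M.astype(float).copy()
    row_targets = np.asarray(row_targets, dtype=float)
    col_targets = np.asarray(col_targets, dtype=float)
    for _ in range(iters):
        X *= (row_targets / X.sum(axis=1))[:, None]   # enforce row sums
        X *= col_targets / X.sum(axis=0)              # enforce column sums
    return X

M = np.array([[1.0, 2.0], [3.0, 4.0]])
X = ras_adjust(M, row_targets=[5.0, 5.0], col_targets=[4.0, 6.0])
print(X.sum(axis=1))   # ≈ [5, 5]
print(X.sum(axis=0))   # [4, 6] (exact after the final column step)
```

The multiplicative form is visible in the code: each entry of the result is the initial entry times a row factor times a column factor, which is why RAS preserves the structure of the initial matrix rather than washing it out.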
Accuracy of stereotactic coordinate transformation using a localisation frame and computed tomographic imaging
1999
The accuracy of coordinate transformation from the computed tomographic (CT) space to the stereotactic frame space was analysed for frame-based stereotactic systems which use a localisation frame and coordinate transformation based on matrix calculation. The coordinate transformation was divided into three consecutive steps: (1) transforming the localisation frame into the CT image built up from pixels with distinct attenuation values, (2) determining the rod centres of the localisation frame in the CT image, and (3) coordinate transformation from the image to the frame space using the centres of the rods in the image space and algebraic, matrix-based calculation. The error contribution at …
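Step (3) above, the algebraic matrix-based transformation from image space to frame space, can be illustrated as a linear least-squares fit on the rod centres. This is a generic sketch, not the paper's specific calculation; the point coordinates and the affine form are assumptions for illustration:

```python
import numpy as np

# Hypothetical setup: known frame-space positions of localisation-frame rods
# and their detected centres in CT image space. The true mapping here is an
# affine transform chosen for illustration.
frame_pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
A_true = np.array([[0.5, 0.02], [-0.03, 0.5]])
t_true = np.array([10.0, 20.0])
image_pts = frame_pts @ A_true.T + t_true

# Solve for the image -> frame transform by least squares on homogeneous
# coordinates [x, y, 1]; with four non-collinear rods the fit is exact.
H = np.hstack([image_pts, np.ones((len(image_pts), 1))])
params, *_ = np.linalg.lstsq(H, frame_pts, rcond=None)

# Apply the recovered transform to a new image-space point.
p_img = np.array([40.0, 45.0, 1.0])
p_frame = p_img @ params
print(p_frame)
```

In the paper's error analysis, inaccuracies in steps (1) and (2) perturb `image_pts`, and the least-squares step then determines how those perturbations propagate into frame-space coordinates.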
The smallest singular value of a shifted $d$-regular random square matrix
2017
We derive a lower bound on the smallest singular value of a random d-regular matrix, that is, the adjacency matrix of a random d-regular directed graph. Specifically, let $C_1 < d < c\,n/\log^2 n$ and let $\mathcal{M}_{n,d}$ be the set of all $n \times n$ square matrices with 0/1 entries, such that each row and each column of every matrix in $\mathcal{M}_{n,d}$ has exactly $d$ ones. Let $M$ be a random matrix uniformly distributed on $\mathcal{M}_{n,d}$. Then the smallest singular value $s_n(M)$ of $M$ is greater than $n^{-6}$ with probability at least $1 - C_2 \log^2 d / \sqrt{d}$, where $c$, $C_1$, and $C_2$ are absolute positive constants independent of any other parameter…
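The objects in this abstract are easy to construct for small n: a 0/1 matrix with all row and column sums equal to d. The sampler below (a rejection sketch, not uniform on $\mathcal{M}_{n,d}$ as the theorem requires) superposes d random permutation matrices and retries if any entry exceeds 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_d_regular_01(n: int, d: int) -> np.ndarray:
    """Sample (by rejection, for small n) a 0/1 matrix whose every row and
    column sums to d: superpose d random permutation matrices and retry
    until no entry exceeds 1. A sketch, not uniform on M_{n,d}."""
    while True:
        M = np.zeros((n, n))
        for _ in range(d):
            M[np.arange(n), rng.permutation(n)] += 1
        if M.max() <= 1:
            return M

n, d = 12, 3
M = random_d_regular_01(n, d)
s_min = np.linalg.svd(M, compute_uv=False).min()

print(M.sum(axis=0))   # every column sum is d
print(M.sum(axis=1))   # every row sum is d
print(s_min)           # the theorem lower-bounds this by n**-6, w.h.p.
```

Note that the all-ones vector is always mapped to d times itself, so d is always a singular value; the theorem's content is that the *smallest* one stays polynomially far from zero with high probability.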
Parallel Construction and Query of Index Data Structures for Pattern Matching on Square Matrices
1999
We describe fast parallel algorithms for building index data structures that can be used to gather various statistics on square matrices. The main data structure is the Lsuffix tree, which is a generalization of the classical suffix tree for strings. Given an n×n text matrix A, we build our data structures in O(log n) time with n² processors on a CRCW PRAM, so that we can quickly process A in parallel as follows: (i) report some statistical information about A, e.g., find the largest repeated square submatrices that appear at least twice in A or determine, for each position in A, the smallest submatrix that occurs only there; (ii) given, on-line, an m×m pattern matrix PAT, check whether it occurs i…